
    The Buffered π-Calculus: A Model for Concurrent Languages

    Message-passing-based concurrent languages are widely used in developing large distributed and coordination systems. This paper presents the buffered π-calculus, a variant of the π-calculus in which channel names are classified as buffered or unbuffered: communication along buffered channels is asynchronous, while it remains synchronous along unbuffered channels. We show that the buffered π-calculus can be fully simulated in the polyadic π-calculus with respect to strong bisimulation. In contrast to the π-calculus, which is hard to use in practice, the new language enables easy and clear modeling of practical concurrent languages. We encode two real-world concurrent languages in the buffered π-calculus: (core) Go and Core Erlang. Both encodings are fully abstract with respect to weak bisimulation.
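
    The buffered/unbuffered split mirrors Go's channels. As a rough illustration of the semantics being modelled (not the paper's encoding into the polyadic π-calculus), here is a minimal Python sketch in which capacity 0 gives a synchronous rendezvous and a positive capacity gives asynchronous, buffered communication; the Channel class and its ack-based handshake are our own simplification:

```python
import queue
import threading

class Channel:
    """Capacity 0 = unbuffered (synchronous rendezvous), capacity > 0 =
    buffered (asynchronous up to the buffer size), in the spirit of the
    buffered pi-calculus and of Go channels."""
    def __init__(self, capacity=0):
        self.items = queue.Queue(maxsize=max(capacity, 1))
        # For rendezvous, the sender waits for an explicit acknowledgement.
        self.acks = queue.Queue() if capacity == 0 else None

    def send(self, value):
        self.items.put(value)        # blocks when the buffer is full
        if self.acks is not None:
            self.acks.get()          # rendezvous: wait for the receiver

    def recv(self):
        value = self.items.get()
        if self.acks is not None:
            self.acks.put(None)      # release the blocked sender
        return value

if __name__ == "__main__":
    ch = Channel(capacity=0)         # unbuffered: send blocks until recv
    threading.Thread(target=lambda: ch.send("hi")).start()
    print(ch.recv())                 # rendezvous completes here
```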

    Symbolic bisimulation for quantum processes

    With the previous notions of bisimulation presented in the literature, checking whether two quantum processes are bisimilar requires instantiating their free quantum variables with arbitrary quantum states and verifying the bisimilarity of the resulting configurations. This makes checking bisimilarity infeasible from an algorithmic point of view, because quantum states form a continuum. In this paper, we introduce a symbolic operational semantics for quantum processes directly at the level of quantum operations, which allows us to describe bisimulation between quantum processes without resorting to quantum states. We show that the symbolic bisimulation defined here is equivalent to the open bisimulation for quantum processes from previous work when strong bisimulations are considered. An algorithm for checking symbolic ground bisimilarity is presented. We also give a modal logic characterisation of quantum bisimilarity based on an extension of Hennessy-Milner logic to quantum processes.

    Comment: 30 pages, 7 figures; comments are welcome.
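
    Ground-bisimilarity checks of this kind typically follow a partition-refinement scheme over a finite transition system. As a loose, purely classical illustration of that scheme (the paper's algorithm additionally tracks quantum operations, which this sketch omits entirely):

```python
def bisimilarity_classes(states, transitions):
    """Naive partition refinement for strong bisimilarity on a finite LTS.
    transitions maps (state, action) -> set of successor states."""
    actions = {a for (_, a) in transitions}
    partition = [set(states)]            # start with one block of all states
    changed = True
    while changed:
        changed = False
        for block in partition:
            for a in actions:
                # Signature of s: the set of blocks reachable via action a.
                def sig(s):
                    succ = transitions.get((s, a), set())
                    return frozenset(i for i, b in enumerate(partition)
                                     if succ & b)
                groups = {}
                for s in block:
                    groups.setdefault(sig(s), set()).add(s)
                if len(groups) > 1:      # block is not stable: split it
                    partition.remove(block)
                    partition.extend(groups.values())
                    changed = True
                    break
            if changed:
                break
    return partition                     # states are bisimilar iff same block
```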

    ResMatch: Residual Attention Learning for Local Feature Matching

    Attention-based graph neural networks have made great progress in feature matching learning. However, the literature lacks insight into how the attention mechanism works for feature matching. In this paper, we rethink cross- and self-attention from the viewpoint of traditional feature matching and filtering. To facilitate the learning of matching and filtering, we inject the similarity of descriptors and the relative positions into the cross- and self-attention scores, respectively. In this way, the attention can focus on learning residual matching and filtering functions with reference to the basic functions of measuring visual and spatial correlation. Moreover, we mine intra- and inter-neighbors according to the similarity of descriptors and relative positions, so that sparse attention for each point can be performed only within its neighborhoods, for higher computational efficiency. Feature matching networks equipped with our full and sparse residual attention learning strategies are termed ResMatch and sResMatch, respectively. Extensive experiments, including feature matching, pose estimation, and visual localization, confirm the superiority of our networks.
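
    The residual idea, adding the raw descriptor similarity to the learned attention logits so the network only has to learn a correction on top of a classical matching score, can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' implementation; the projection matrices and the inject_weight scale are placeholders:

```python
import numpy as np

def residual_cross_attention(q_desc, k_desc, values, W_q, W_k,
                             inject_weight=1.0):
    """Cross-attention whose logits are the sum of a learned score and
    the injected descriptor similarity (the 'residual' formulation)."""
    d = W_q.shape[1]
    learned = (q_desc @ W_q) @ (k_desc @ W_k).T / np.sqrt(d)
    injected = q_desc @ k_desc.T           # classical descriptor similarity
    scores = learned + inject_weight * injected
    scores -= scores.max(axis=-1, keepdims=True)    # stable softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ values
```

    For self-attention the analogous injection would use relative positions between keypoints in place of descriptor similarity.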

    Proving Expected Sensitivity of Probabilistic Programs with Randomized Variable-Dependent Termination Time

    The notion of program sensitivity (a.k.a. Lipschitz continuity) specifies that changes in the program input result in proportional changes to the program output. For probabilistic programs, the notion is naturally extended to expected sensitivity. A previous approach develops a relational program-logic framework for proving expected sensitivity of probabilistic while loops in which the number of iterations is fixed and bounded. In this work, we consider probabilistic while loops where the number of iterations is not fixed but randomized and depends on the initial input values. We present a sound approach for proving expected sensitivity of such programs. Our approach is martingale-based and can be automated through existing martingale-synthesis algorithms. Furthermore, it is compositional for sequential composition of while loops under a mild side condition. We demonstrate the effectiveness of our approach on several classical examples, including the Gambler's Ruin, stochastic hybrid systems, and stochastic gradient descent. We also present experimental results showing that our automated approach can handle various probabilistic programs from the literature.
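
    Expected sensitivity asks that the expected distance between outputs be proportional to the distance between inputs. As a purely empirical illustration (the paper proves such bounds with martingales rather than by simulation), the sketch below couples two Gambler's Ruin runs on shared coin flips and estimates the expected gap in their termination times; all names and parameters here are ours:

```python
import random

def gamblers_ruin_rounds(x, n, p=0.5, rng=random):
    """Number of rounds until the capital x hits 0 or n."""
    t = 0
    while 0 < x < n:
        x += 1 if rng.random() < p else -1
        t += 1
    return t

def expected_gap(x1, x2, n=20, trials=2_000, seed=0):
    """Monte-Carlo estimate of E[|T(x1) - T(x2)|] under the coupling
    that feeds both runs identical coin flips."""
    total = 0.0
    for i in range(trials):
        r1, r2 = random.Random(seed + i), random.Random(seed + i)
        total += abs(gamblers_ruin_rounds(x1, n, rng=r1)
                     - gamblers_ruin_rounds(x2, n, rng=r2))
    return total / trials

# e.g. expected_gap(10, 11) should grow roughly linearly in |x1 - x2|.
```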

    Characterising Testing Preorders for Finite Probabilistic Processes

    In 1992, Wang & Larsen extended the may- and must-testing preorders of De Nicola and Hennessy to processes featuring probabilistic as well as nondeterministic choice. They concluded with two problems that have remained open ever since: to find complete axiomatisations and alternative characterisations of these preorders. This paper solves both problems for finite processes with silent moves. It characterises the may preorder in terms of simulation, and the must preorder in terms of failure simulation. It also gives a characterisation of both preorders using a modal logic. Finally, it axiomatises both preorders over a probabilistic version of CSP.

    Comment: 33 pages.
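
    Stated schematically, the simulation characterisations announced here take the following shape (notation ours; the precise definitions, and the orientation of the failure-simulation preorder, are as in the paper):

```latex
% \sqsubseteq_S: (probabilistic) simulation; \sqsubseteq_{FS}: failure simulation.
\[
  P \sqsubseteq_{\mathrm{may}} Q \;\Longleftrightarrow\; P \sqsubseteq_{S} Q
  \qquad
  P \sqsubseteq_{\mathrm{must}} Q \;\Longleftrightarrow\; P \sqsupseteq_{FS} Q
\]
```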

    Generalizing Deep Models for Overhead Image Segmentation Through Getis-Ord Gi* Pooling

    That most deep learning models are purely data-driven is both a strength and a weakness. Given sufficient training data, the optimal model for a particular problem can be learned. However, this is usually not the case, and so the model is instead either learned from scratch from a limited amount of training data or pre-trained on a different problem and then fine-tuned. Both of these situations are potentially suboptimal and limit the generalizability of the model. Motivated by this, we investigate methods to inform or guide deep learning models for geospatial image analysis in order to increase their performance when limited training data is available or when they are applied to scenarios other than those they were trained on. In particular, we exploit the fact that there are certain fundamental rules governing how things are distributed on the surface of the Earth, and that these rules do not vary substantially between locations. Based on this, we develop a novel feature pooling method for convolutional neural networks using Getis-Ord Gi* analysis from geostatistics. Experimental results show that our proposed pooling function has significantly better generalization performance than a standard data-driven approach when applied to overhead image segmentation.
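
    The Getis-Ord Gi* statistic scores how strongly a neighborhood's values exceed the global mean, normalised by the global spread. Below is a minimal sketch of how it could drive a pooling layer, using binary weights over non-overlapping windows; this is a simplified reading of the idea, not the paper's exact operator:

```python
import numpy as np

def gi_star_pool(x, k=2, eps=1e-8):
    """Pool a 2-D feature map by the Getis-Ord Gi* hot-spot score of each
    non-overlapping k x k window, with binary weights w_ij = 1 inside the
    window. High scores mark windows whose activations are globally
    unusually large ('hot spots')."""
    n, xbar, s = x.size, x.mean(), x.std()
    h, w = x.shape
    out = np.empty((h // k, w // k))
    for i in range(0, h - h % k, k):
        for j in range(0, w - w % k, k):
            win = x[i:i + k, j:j + k]
            m = win.size                        # sum of binary weights
            num = win.sum() - xbar * m
            den = s * np.sqrt((n * m - m * m) / (n - 1)) + eps
            out[i // k, j // k] = num / den     # Gi* score as pooled value
    return out
```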